- Jahangeer Mohiudin Lone
- Aaqib Rshid
- Vinay Sahu
- Rinky Sahu
- Shikha Mishra
- Shubhra Tiwari
- Gaurav Gulhare
- S. R. Tandan
- Rohit Miri
- Glori Gunjan Bagh
- Bhoopendra Dhar Diwan
- Anirudh Kumar Tiwari
- Fayaz Ahmad Lone
- Dilip K. U. Barik
- Rachna Verma
- Amrita Verma
- Priti Verma
- Kamal Mehta
- Sumati Gauraha
- Anand Parihar
- Somen Roy
- Shweta Dubey
- Raksha Shukla
- Nilmani Verma
- Sanjay Kumar
- B. D. Diwan
Diwan, Tarun Dhar
- Security on Mobile Agent Based Communication System
Authors
1 Deptt. of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Wireless Communication, Vol 5, No 7 (2013), Pagination: 305-308
Abstract
As signatures continue to play an important role in financial, commercial and legal transactions, truly secure authentication becomes more and more crucial. To perform verification or identification of a signature, several steps must be carried out. Online signature verification has been shown to achieve a much higher verification rate than offline verification, and this paper proposes a novel framework for online signature verification. Unlike previous methods, our approach makes use of online handwriting instead of handwritten images for registration. The online registrations enable robust recovery of the writing trajectory from an input online signature and thus allow effective shape matching between registration and verification signatures. In addition, the features have been calculated using 16-bit fixed-point arithmetic and tested with different classifiers, such as hidden Markov models, support vector machines, and a Euclidean distance classifier. We propose several new techniques to improve the performance of the signature verification system.
Keywords
Verification Rate, Handwriting Recognition, Training Data, Testing Data.
- Application of Two Level Securities for Data by Combining Symmetric Key Encryption and Audio Steganography
Authors
1 Deptt. of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Software Engineering, Vol 5, No 8 (2013), Pagination: 278-281
Abstract
The proposed method hides the secret message by searching for identical bits between the secret message and the image pixel values. Many different carrier file formats can be used, but digital images are the most popular because of their prevalence on the Internet. For hiding secret information in images there exists a large variety of steganography techniques; some are more complex than others, and all of them have their respective strong and weak points. The first to employ hidden communication techniques (with radio transmissions) were armies, because of the strategic importance of secure communication and the need to conceal the source as much as possible. Nowadays, new constraints on using strong encryption for messages are added by international laws, so if two peers want to use it, they may resort to hiding the communication in casual-looking data. This problem has become more and more important in recent years in around thirty of the technologically major countries of the world. Methods are discussed and analyzed based on the ratio between the number of identical and non-identical bits between the pixel color values and the secret message values. This property is used for the proposed image encryption and for steganography, to increase the security level of the encoded image and to make it less visible.
Keywords
Surveillance Information, Segmenting, Object Tracking, Significant Percentage, Auto-Calibration.
- Implementation of Object Oriented Programming Technique to Generate Optimal Test Cases
Authors
1 Deptt. of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Software Engineering, Vol 5, No 7 (2013), Pagination: 258-260
Abstract
Test cases are generated with the help of object, sequence, activity, collaboration and state-chart diagrams. Numerous applications have been developed that use activity diagrams to generate test cases, but they lack generalization and automation, the two basic pitfalls of present approaches to software testing. Testing is basically done manually, even for test-case generation; when it is automated, it fails to generalize and fits only a particular application or set of applications. Validation of the obtained test cases is also manual: even in an automated system, the obtained test cases must be validated manually before they can be used to analyze the software. Test cases are generated from the code in which the application is developed, so existing systems based on code analysis certainly depend on the language and application used, and generalization is quite difficult. Since test-case generation from design specifications has the added advantage of making test cases available early in the software development cycle, the UML class diagram is used. We implement an object-oriented programming technique to generate optimal test cases, and validation of the test cases is automated. An evolutionary genetic algorithm is used to generate the valid and optimal patterns necessary. Test cases can be easily generated in the case of regression testing; even in the case of software re-engineering, once updates are reflected in the design specification, a new set of test cases can be generated automatically. This reduces the complexity of analyzing the code.
Keywords
UML Class, Object-Oriented Programming, Testing, Verification Method.
- Implement to Increase the Safety of Railway Vehicles with Real Time Warning Facility
Authors
1 Dept. of Engineering (CSE), Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dept. of Engineering (CSE), Dr. C. V. Raman University, Bilaspur (C.G), IN
3 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Networking and Communication Engineering, Vol 5, No 8 (2013), Pagination: 402-406
Abstract
Railways are the second fastest means of transport in the country and therefore demand great security and safety. The main objective is to protect passengers from sudden accidents due to rail cracks that can lead to derailment. In this report, a combination of Nadal's theory and the lateral and vertical vibrations of ride comfort has been calculated. On the basis of this information, the frequency of vibrations can be analyzed and efforts can be made to prevent accidents. As rail is closely associated with passenger and cargo transportation, it carries high risk in terms of human lives and cost of assets. New technologies and better safety standards are constantly introduced, but accidents still occur. There will always be some risk associated with derailments and collisions, but it can be reduced by detailed research into the root causes. Indian Railway transport is one of the major modes of transportation, so it must offer a high comfort level for passengers and staff. However, the comfort that passengers experience is a highly complex and individual phenomenon. In several studies, noise and vibration have been identified as the most important factors for high comfort. The main sources of vibration in a train are track defects, which cause wheel-flange climb or rail rollover. The nature of vibration itself is random and covers a wide frequency range. The improvement of passenger comfort while travelling has been the subject of intense interest for many train manufacturers, researchers and companies all over the world. Although new techniques in manufacturing and design ensure better ride quality in railway carriages, it is sometimes impossible to completely eliminate track defects or various ground irregularities.
Keywords
Vibration Monitoring, Dynamic Control, Measuring Motion, Acceleration, Inclination, Graphical User Interface.
- Dynamic Channel Allocation in Mobile Network by Fuzzy Logic
Authors
1 Department of Engineering, Dr. C.V. Raman University, Bilaspur, IN
Source
Networking and Communication Engineering, Vol 4, No 11 (2012), Pagination: 689-694
Abstract
The fuzzy means of allocating WDM (wavelength division multiplexing) channels in a hierarchical all-optical network (AON) for the modified token medium access protocol is addressed. The goal is to minimize the average delay of local subnet and global bound traffic, and to maximize the number of nodes that the network can support. This is achieved by allotting a minimum number of spatially reused channels to the subnets, which can accommodate a certain maximum number of nodes; in effect, the minimum number of channels is sought for each subnet in terms of cost. By working out the maximum number of nodes for each subnet and the total number of subnets that can be supported, the optimum number of global channels, and hence the overall total number of nodes for the entire network, can be determined. The packet generation rate and average delay in slot time are used to gauge the performance of the fuzzy channel allocation model. Recent demand for mobile telephone service has been growing rapidly, while the electromagnetic spectrum of frequencies allocated for this purpose remains limited. Channel allocation schemes provide flexible and efficient access to bandwidth in wireless and mobile communication systems. In this paper, a distributed dynamic channel allocation algorithm is presented: as the spatial distribution of channel demand changes with time, the spatial distribution of allocated channels adjusts accordingly. The algorithm guarantees relaxed mutual exclusion, provides the necessary conditions for its information structure, is deadlock free and starvation free, and prevents co-channel interference.
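As a toy illustration of the co-channel-interference constraint above, the sketch below greedily assigns each cell the lowest-numbered channel not used by any interfering neighbour. The topology, cell names and greedy policy are illustrative assumptions, not the paper's fuzzy or distributed algorithm.

```python
# Hypothetical greedy channel allocator: each cell takes the lowest
# channel not already held by an interfering neighbour, so no two
# neighbouring cells ever share a channel (no co-channel interference).

def allocate_channel(cell, neighbours, allocation, num_channels):
    """Assign the lowest channel not used by any interfering neighbour."""
    busy = {allocation[n] for n in neighbours.get(cell, []) if n in allocation}
    for ch in range(num_channels):
        if ch not in busy:
            allocation[cell] = ch
            return ch
    return None  # blocked: no interference-free channel available

# Toy topology (made up): each cell lists its interfering neighbours.
neighbours = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"], "D": ["C"]}
allocation = {}
for cell in ["A", "B", "C", "D"]:
    allocate_channel(cell, neighbours, allocation, num_channels=3)

print(allocation)  # D can safely reuse A's channel: they do not interfere
```

Note that cell D reuses a channel already held elsewhere, which is the spatial-reuse idea the abstract describes; a real scheme would also re-run the allocation as demand shifts over time.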
Keywords
Fuzzy Solution, Fuzzy Optimization, Dynamic Channel Allocation, Grid Cellular System, Fuzzy Set, Node, Subnet, Global Channel, Global Delay, Local Delay, Local Channel.
- Saving Lives by Counting Blinks While Driving
Authors
1 CS & IT Department, Govt. E.R.R.P.G Science College, Bilaspur (C.G.), IN
2 Dr. C. V. Raman University Kota (C.G), IN
Source
Digital Image Processing, Vol 7, No 6 (2015), Pagination: 167-171
Abstract
In this paper we analyze the behaviour of eye movement for the prevention of accidental situations in motor driving, examining eye blinks in normal and in driving conditions. Road accidents are one of the biggest problems in India and the world, and a driver's lack of alertness is a prime cause of most automobile crashes. Accidents caused in a dozing state are more severe because of the very high speeds involved, and the driver fails to take any protective action just before the collision. The state of drowsiness can be detected very easily by watching the driver's eyes. In this paper we consider blinks of the eyes; blinks may be voluntary or involuntary, and both are taken into consideration. As the vigilance level of the driver changes, the number of blinks also changes. The number of blinks in normal and in driving situations is the basis for judging whether the driver is drowsy or not.
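The blink-rate judgment described above can be sketched as a comparison against a personal baseline; the 40% deviation threshold and the sample rates below are assumed numbers for illustration, not values from the paper.

```python
# Hypothetical drowsiness check: flag the driver when the observed blink
# rate deviates from their normal baseline by more than a set fraction,
# in either direction (drowsiness can lower or raise blink frequency).

def drowsiness_alert(blinks_per_minute, baseline_bpm, threshold=0.4):
    """Return True when the blink rate deviates from baseline by more
    than `threshold` (as a fraction of the baseline)."""
    deviation = abs(blinks_per_minute - baseline_bpm) / baseline_bpm
    return deviation > threshold

print(drowsiness_alert(17, 17))  # rate matches baseline -> False
print(drowsiness_alert(6, 17))   # sharp drop in blinking -> True
```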
Keywords
Template, Frames, Blinks, Testing, Automatically, Human Computer Interface, Drowsiness.
- Current Approaches and Challenges for Skin Cancer Detection
Authors
1 Deptt. of Engineering (CSE), Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Deptt. of Engineering (CSE), Dr. C. V. Raman University, Bilaspur (C.G), IN
3 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Digital Image Processing, Vol 6, No 1 (2014), Pagination: 33-36
Abstract
Within the last several years, cancer detection in the medical sector has been a very active research area of computer vision. Cancer can be recognized in a number of different ways: by the presence of certain signs and symptoms, by screening tests, or by medical imaging. Once a possible cancer is detected, it is diagnosed by microscopic examination of a tissue sample, and most patients receive radiation therapy at some point during the treatment process. While cancer can affect males and females of all ages, and some cancers are more common in children, in most cases the risk of developing cancer generally increases with age. In 2007, cancer caused about 13% of all human deaths worldwide (around 8 million). The most common treatment modalities for cancer are surgery, radiation therapy, and systemic therapy (i.e., drugs). These medical interventions have documented survival advantages, but their implications for quality of life (QOL) are not trivial; surgery is performed on about 60% of cancer survivors. Rates are rising as more people live to an old age and as mass lifestyle changes occur in the developing world. The growing risk of cervical cancer in India makes it necessary to develop methods for early detection of the disease and its subsequent treatment. Lack of knowledge is found to be the main reason for not having had screening; non-availability of a female screener, inconvenient clinic times, lack of awareness of the test's indication and benefits, not considering oneself at risk of developing cancer, and fear of embarrassment are further factors that influence cervical cancer screening. Over the past decade, the global health community has been giving increased attention to the importance of addressing cervical cancer prevention where the disease burden is greatest.
Keywords
Immunotherapy, Biologic Therapies, Cancer Prevention, Cell Transplants, Cancer Survivors.
- Improve Efficiency of On-Line Handwriting Recognition Using Hidden Markov Models
Authors
1 Dept. of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dept. of Basic Sciences, Dr. C. V. Raman University, Bilaspur (C.G), IN
3 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Digital Image Processing, Vol 5, No 7 (2013), Pagination: 324-327
Abstract
As signatures continue to play an important role in financial, commercial and legal transactions, truly secure authentication becomes more and more crucial. To perform verification or identification of a signature, several steps must be carried out. Online signature verification has been shown to achieve a much higher verification rate than offline verification, and this paper proposes a novel framework for online signature verification. Unlike previous methods, our approach makes use of online handwriting instead of handwritten images for registration. The online registrations enable robust recovery of the writing trajectory from an input online signature and thus allow effective shape matching between registration and verification signatures. In addition, the features have been calculated using 16-bit fixed-point arithmetic and tested with different classifiers, such as hidden Markov models, support vector machines, and a Euclidean distance classifier. We propose several new techniques to improve the performance of the signature verification system.
Keywords
Verification Rate, Handwriting Recognition, Training Data, Testing Data.
- Robust Tracking and Object Classification Towards Automated Video Surveillance Recognition
Authors
1 Deptt. of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Digital Image Processing, Vol 5, No 7 (2013), Pagination: 347-349
Abstract
Intelligent (smart) surveillance systems, which now watch video and provide alerts and content-based search capabilities, make the video monitoring and investigation process scalable and effective. The programs that analyze the video and provide alerts are commonly referred to as video analytics; they are responsible for turning video cameras from mere data-gathering tools into smart surveillance systems for proactive security. Smart surveillance systems have been enabled by advances in computer vision, video analysis, pattern recognition and multimedia indexing technologies over the past decade. Sometimes, additional video cameras are necessary to complement the surveillance information, especially for large-scale applications. We aim to investigate new techniques and design new intelligent algorithms that make use of partial information from several video sources. One key concern is the robustness of these techniques to changes in the environment: illumination, the size of the objects and many others. The system provides object tracking data for further scene analysis and understanding.
Keywords
Surveillance Information, Segmenting, Object Tracking, Significant Percentage, Auto-Calibration.
- Wavelet Image Compression and Parameter Based Measurement
Authors
1 Department Of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Department of Basic Sciences, Dr. C. V. Raman University, Bilaspur (C.G), IN
3 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Digital Image Processing, Vol 5, No 5 (2013), Pagination: 264-267
Abstract
With the development of digital image processing technology there are several applications, one of which is image compression. The assigned research is to develop an image compression technique based on JPEG 2000 using the wavelet transform. For efficient representation of a digital image, in order to reduce the memory required for storage, improve the data access rate from the storage device, and reduce the bandwidth and time required for data transfer across a communication channel, the wavelet transform is the best solution. In this research, different types of wavelet functions are used. The wavelet transform converts the pixel information into transform coefficients; the transform coefficients are quantized, and then entropy coding is performed. For reconstruction, entropy decoding and the inverse wavelet transform are applied. In this project a comparative study has been carried out using different wavelet functions, such as Haar, db4 and db6, for the compression and reconstruction of the image.
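The transform, quantize and entropy-code pipeline described above can be illustrated with a one-level 1-D Haar transform on a toy signal. Real JPEG 2000 uses 2-D multi-level wavelets and arithmetic coding; the signal values and the quantization step size here are arbitrary assumptions.

```python
# Toy sketch of the compression pipeline: Haar transform, then uniform
# quantization. Small detail coefficients quantize to zero, which is
# what an entropy coder later exploits for compression.

def haar_1d(signal):
    """One decomposition level: pairwise averages then pairwise differences."""
    avg = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    diff = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avg + diff

def quantise(coeffs, step):
    """Uniform quantization: map each coefficient to an integer index."""
    return [round(c / step) for c in coeffs]

signal = [9, 7, 3, 5, 6, 10, 2, 6]
coeffs = haar_1d(signal)      # averages first, then detail coefficients
q = quantise(coeffs, step=2)  # small details collapse toward zero
print(coeffs)
print(q)
```

Reconstruction would run the inverse steps in reverse order (dequantize, then inverse Haar), losing only what quantization discarded.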
Keywords
Wavelet Transform, Entropy Encoding, Compression Ratio, Image Processing.
- Real Time Eye Template Generation System in an Image Sequence
Authors
1 Dr. CV Raman University, Bilaspur, IN
Source
Digital Image Processing, Vol 4, No 11 (2012), Pagination: 575-578
Abstract
Eye blinking is one of the prominent areas for solving many real-world problems. The process of blink detection consists of two phases: eye tracking followed by detection of the blink. Work that has been carried out for eye tracking only is not suitable for eye blink detection; therefore, approaches have been proposed for eye tracking along with eye blink detection. This paper implements one of these approaches, given by Michael. The results cover template creation accuracy and total blink detection, counting the number of eye blinks in an image sequence. The online template is completely independent of any past templates that may have been created during the run of the system. Finally, after analyzing all these approaches, we obtain some of the parameters on which better performance of an eye blink detection algorithm depends.
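The template-correlation test at the heart of such systems can be sketched as follows: a frame's eye region is scored against a stored "closed-eye" template with normalized cross-correlation, and a high score is read as a blink. The pixel values and the 0.8 decision threshold below are fabricated for illustration, not taken from Michael's method.

```python
# Sketch: classify an eye patch as closed by correlating it against a
# stored closed-eye template. Values near 1 mean the patch matches the
# template; a threshold turns the score into a blink decision.

def ncc(patch, template):
    """Normalized cross-correlation of two equal-length pixel lists."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    return num / (dp * dt)

closed_eye = [10, 12, 11, 40, 42, 41, 10, 12]      # stored template
open_frame = [60, 10, 55, 12, 58, 11, 57, 13]      # dissimilar patch
closed_frame = [11, 13, 12, 41, 43, 40, 11, 13]    # near-identical patch

print(ncc(closed_frame, closed_eye) > 0.8)  # blink detected
print(ncc(open_frame, closed_eye) > 0.8)    # no blink
```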
Keywords
Template, Frames, Interface, Testing, Automatically, Involuntary.
- Eye Tracking and Detection by using Template Generation and Parameter based Judgment
Authors
1 Dr. CV Raman University, Bilaspur, IN
Source
Digital Image Processing, Vol 4, No 14 (2012), Pagination: 773-775
Abstract
The eyes are tracked, and correlation scores between the actual eye and the corresponding "closed-eye" template are used to detect blinks. Eye blinking is one of the prominent areas for solving many real-world problems, but work that has been carried out for eye tracking only is not suitable for eye blink detection. A stored template for a particular depth is chosen; once the template is chosen and the system is in operation, the subject is restricted to being at the specified distance. Another disadvantage of such a system is that changing camera positions requires the whole system to be retrained. The process of blink detection consists of two phases: eye tracking followed by detection of the blink. Therefore, approaches have been proposed for eye tracking along with eye blink detection, and this paper implements one of these approaches, given by Michael et al [1]. Furthermore, the results cover template creation accuracy and total blink detection, counting the number of eye blinks in an image sequence. The online template is completely independent of any past templates that may have been created during the run of the system. Finally, after analyzing all these approaches, we obtain some of the parameters on which better performance of an eye blink detection algorithm depends.
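Counting completed blinks over an image sequence can be sketched as a simple transition count: each frame is classified open or closed (the booleans below stand in for the template-correlation test, and the frame labels are fabricated), and a closed-to-open transition completes one blink.

```python
# Sketch: a blink is counted each time the per-frame closed-eye signal
# goes from closed back to open. A sequence still closed at the end has
# not completed a blink yet.

def count_blinks(closed_flags):
    """Count closed->open transitions in a per-frame closed-eye signal."""
    blinks = 0
    prev_closed = False
    for closed in closed_flags:
        if prev_closed and not closed:
            blinks += 1
        prev_closed = closed
    return blinks

# Fabricated frame classifications: True = eye judged closed this frame.
frames = [False, False, True, True, False, False, True, False, True, True]
print(count_blinks(frames))  # two completed blinks; the last is still open
```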
Keywords
Template, Frames, Interface, Testing, Automatically, Involuntary.
- An Intelligent Decision Support System Model for Health Care Planners Using Fuzzy Logic
Authors
1 Department of Engineering, Dr. C.V. Raman University, Bilaspur, IN
Source
Fuzzy Systems, Vol 4, No 8 (2012), Pagination: 314-316
Abstract
The main objective of this research work is to apply fuzzy logic to an improved Intelligent Decision Support System Model for Health Care Planners, so that novel and hidden knowledge patterns can be generated to automate and quicken the process of decision making in clinical diagnosis as well as other domains of health care management. In the healthcare sector, quality demands are rising for the design of expert systems for medical diagnosis. At the same time, the growing capture of biological, clinical and administrative data, and the integration of distributed and heterogeneous databases, create a completely new basis for medical quality and cost management. Against this background, we applied intelligent data mining methods to analyze medical repositories. To reach the main goal of the research, applications of fuzzy logic are explored on medical databases to discover knowledge. We have used databases on heart disease, an immunization program (polio database), and a medical dataset of patients collected from a free Internet repository, the public health care sector and renowned private nursing homes. This implementation uses innovatively designed fuzzy logic rules so that the system can be queried and consulted in a properly defined way, helping in beneficial future analysis. The campaign can classify immunization programs into Efficient, Satisfactory and Poor categories based on the number of children immunized and the number of houses left, together with observations for improvement. Analysis of this type of classification facilitates the identification of areas needing attention and more resources to improve performance. These experiments can work synergistically to produce a single knowledge pattern or different pieces of knowledge patterns, so that health care planners can take advantage of this knowledge discovery to lower healthcare costs while improving healthcare quality.
The results of the experiments show that fuzzy-logic-integrated knowledge discovery on immunization data helps decision makers to improve the efficiency of immunization programs in Indian states by proper monitoring and categorization of the health centers, the supervisors of health schemes, and their performances. These results address the research question: can we provide optimized answers to a medical problem which is imprecise, partially true and uncertain?
Keywords
Health Care, Knowledge Discovery, Fuzzy Logic, Intelligent Decision Support System, Immunization, Classification.
- A New Technique for Relational Database Protection Using Digital Watermarking
Authors
1 Deptt. of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Data Mining and Knowledge Engineering, Vol 5, No 7 (2013), Pagination: 278-281
Abstract
In watermark embedding, the database is embedded into an image. First we store the binary data of the image in one data structure (say, an array) and the database content in another array; then we copy the content of the image and the database into a third array. A watermarking algorithm is used to recover the database from updated values. This is a web-based watermarking technique: if the software is stored on a server, we can access it from a client system. Digital watermarking can be used to embed various types of data, depending on the particular application and intended use. For example, a watermark in a digital movie file might simply identify the name or version of the movie. Alternatively, it might convey copyright or licensing information from the movie's creator, or it might embed a customer or transaction number that could be used to identify individual payment or transaction data relating to that particular copy of the movie. However, the number of bits that a watermark can contain today is typically modest, enough to provide only some basic codes or identifiers.
Keywords
Binary Data, Transaction Number, Parser, Hybrid Methods, Automatically.
- Improve the Accuracy and Efficiency of Medical Diagnosis Analysis Using Knowledge Discovery
Authors
1 Deptt. of Engineering (CSE), Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Data Mining and Knowledge Engineering, Vol 5, No 8 (2013), Pagination: 319-322
Abstract
Computer-based support in health care is becoming ever more important; no other domain sees so many innovative changes with such high social impact. There is already a long-standing tradition of computer-based decision support dealing with complex problems in medicine, such as diagnosing disease, making managerial decisions and assisting in the prescription of appropriate treatment. The healthcare industry is among the most information-intensive industries: medical information, knowledge and data keep growing on a daily basis, and it has been estimated that an acute care hospital may generate five terabytes of data a year. The ability to use these data to extract useful information for quality healthcare is crucial, and computer-assisted information retrieval may help support quality decision making and avoid human error. Although human decision making is often optimal, it is poor when there are huge amounts of data to be classified, and the efficiency and accuracy of decisions decrease when humans are put under stress and immense workloads. Imagine a doctor who has to examine 5 patient records; he or she will go through them with ease. But if the number of records increases from 5 to 50 under a time constraint, it is almost certain that the accuracy with which the doctor delivers decisions will not be as high as when there were only five records to analyze.
Keywords
Health Care, Natural Language Processing, Fuzzy Logic, Classification, Intelligent Decision Support System, Identifying Patients, Diagnosis.
- Personal Identification Systems
Authors
1 Dr. C.V. Raman University, Bilaspur, IN
Source
Data Mining and Knowledge Engineering, Vol 4, No 6 (2012), Pagination: 295-297
Abstract
This paper presents a secure platform for identification of the human body, covering speech recognition, face recognition and detection, fingerprint recognition, and eye detection and recognition. All of these are essential methods in personal identification for identifying all types of human behaviour in order to derive a correct data set. Establishing the valid personal identity of any person, so that they can enter valid data or derive and gather correct information from a data set, is a big challenge in personal identification. Face recognition and identification can therefore provide a very powerful tool for solving all personal identification matters.
Keywords
Face Recognition, Irises, Fingerprint, Recognition, Speech Recognition, Signature.
- Knowledge of Word Alignment Position Related To Parallel English-Hindi Sentences
Authors
1 Deptt. of Engineering (CSE), Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 5, No 8 (2013), Pagination: 366-370
Abstract
This paper is based on knowledge of word alignment positions related to parallel English-Hindi sentences. The methodology is the basis for developing a parallel English-Hindi word dictionary, after syntactic and semantic analysis of the English-Hindi source text. The proposed system is used for aligning English and Hindi sentences, and the methodology can be applied to other languages. A large parallel corpus for the English-Hindi language pair is not readily available; to solve this problem, two strategies are used. The first is normalization of tagged English sentences and Hindi sentences; the second is mapping English-Hindi sentences using the parallel English-Hindi word dictionary. Hence the proposed system is desirable for encouraging English and Hindi parallel sentences. A more detailed explanation is given for rule-based (constraint-based) part-of-speech tagging and morphological disambiguation, and some rule-based part-of-speech tagging studies on the Turkish language are presented. For the English language, monolingual dictionaries are recommended as a source of important information concerning grammar, collocations, spelling, pronunciation, context and the etymology of words. There are a large number of materials which help students and teachers to work with dictionaries; however, not all of these materials can be used at lower secondary schools, because the situation at many primary and secondary schools is different. Results indicate that by combining these hand-crafted, statistical and learned information sources, a recall of 96 to 97% with a corresponding precision of 93 to 94% and an ambiguity of 1.02 to 1.03 parses per token is attained on test texts. However, the impact of the learned rules is not significant, as hand-crafted rules do most of the easy work in the initial stages.
Keywords
Handcrafted Rules, Monolingual, Parser, Automatically, Speech Tagging, Parallel Sentences.
- Sentence Boundary Detection Using Maximum Entropy Model
Authors
1 Deptt. of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dr. C. V. Raman University, Bilaspur (C.G), IN
3 Sagar Institute of Sciences and Technology, Bhopal, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 5, No 7 (2013), Pagination: 302-306
Abstract
The sentence boundary detection system has three independent applications (rule-based, HMM, and maximum entropy). The maximum entropy model is the central part of this system; it achieved an error rate of less than 2% on part of the Wall Street Journal (WSJ) corpus with only eight binary features. The performance of the three applications is illustrated and discussed. Sentence boundary disambiguation is the task of identifying the sentence elements within a paragraph or an article. Because the sentence is the basic textual unit immediately above the word and phrase, Sentence Boundary Disambiguation (SBD) is one of the essential problems for many applications of natural language processing: parsing, information extraction, machine translation, and document summarization. The accuracy of the SBD system directly affects the performance of these applications. However, past research in this field has already achieved very high performance, and the area is not very active now; the problem seems too simple to attract the attention of researchers.
Keywords
Keywords
Sentence Boundary Disambiguation, Maximum Entropy Model, Features, Generalized Iterative Scaling, Hidden Markov Model.

- Measurement of Eye Blinking Through Intel Microprocessor for Safety Driving
Authors
1 Deptt. of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dr. C. V. Raman University, Bilaspur (C.G), IN
3 Sagar Institute of Sciences and Technology, Bhopal, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 5, No 7 (2013), Pagination: 307-310
Abstract
Driver inattentiveness is an important cause of most vehicle-crash accidents, and drowsy-driver detection methods can form the basis of a system that potentially reduces such accidents. Intel-Eye is a real-time monitor of driver alertness and shock-related facial expressions. It extracts, in real time, visual cues such as eyelid movement, gaze movement, head movement, and facial expression, which typically characterize a person's level of alertness, and systematically combines them to infer the driver's fatigue level. Intel-Eye distinguishes itself by a two-way approach to eye-gaze analysis. Shock analysis is performed to identify the driver's expression, and signals are sent to an automatic braking system. A probabilistic model is developed within Intel-Eye and used to predict human alertness from the visual cues obtained. The model focuses mainly on eye blinking, because the system is programmed in MATLAB: MATLAB is used to build an automatic eye-blink tracking and detection system for video. The eyes are tracked through the image sequence, and correlation scores between the actual eye and corresponding "closed-eye" templates are used to detect blinks. Accurate head- and eye-tracking results are obtained at a processing rate of more than 30 frames per second (fps) in more than 90% of cases, with a low false-positive blink-detection rate of 0.01%. The driver's natural blink rate is measured during the first 10 minutes of driving, and changes in blink frequency are then observed. A dangerous condition occurs when the eye-blink rate decreases (or increases) by 50% (50%), 75% (100%), and 100% (300%) from the natural condition, for the lower, medium, and higher danger levels respectively.
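The danger-level rule at the end of the abstract can be written down directly: compare the current blink rate with the baseline measured in the first 10 minutes and classify the relative change. The thresholds follow the abstract; the function name and units (blinks per minute) are illustrative.

```python
# Sketch of the blink-rate danger-level rule described above.
# Thresholds are taken from the abstract; names are illustrative.

def danger_level(baseline_bpm, current_bpm):
    """Classify driver state from the relative change in blinks per minute."""
    change = (current_bpm - baseline_bpm) / baseline_bpm
    decrease, increase = -change, change
    if decrease >= 1.0 or increase >= 3.0:    # 100% drop or 300% rise
        return "higher"
    if decrease >= 0.75 or increase >= 1.0:   # 75% drop or 100% rise
        return "medium"
    if decrease >= 0.5 or increase >= 0.5:    # 50% drop or 50% rise
        return "lower"
    return "normal"

print(danger_level(20, 20))  # no change -> "normal"
print(danger_level(20, 8))   # 60% decrease -> "lower"
print(danger_level(20, 4))   # 80% decrease -> "medium"
print(danger_level(20, 0))   # 100% decrease -> "higher"
```

The template-correlation blink detector that feeds this rule is not sketched here, as it depends on the video pipeline.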
Keywords
Measurement of Eye Blinking, Sensing of Physiological Characteristics, Advanced Emergency Braking Systems (AEBS), Electronic Stability Control.

- Implementation for Illustrative Sentences for English Multiword Expressions
Authors
1 Deptt. of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 5, No 7 (2013), Pagination: 311-314
Abstract
Recognizing multiword expressions is the fundamental task of the field. It can potentially be approached in a supervised way, i.e. using a manually annotated training corpus to learn the characteristics (features) of multiword expressions with respect to their structure and contextual environment; this knowledge is then used to locate multiword expressions occurring in other annotated text. The combination of symbolic and statistical methods has been apparent in natural language processing (NLP) for some time, and multiword expressions are a key problem for the development of large-scale, linguistically sound natural language processing technology. We propose a method to search a research-paper database for illustrative sentences containing English multiword expressions (MWEs). We focus on syntactically flexible expressions such as "a lot of work." Traditionally, illustrative sentences containing such expressions have been found by limiting the maximum number of words between the component words of the MWE. However, this method cannot collect enough illustrative sentences in which clauses are inserted between the component words of MWEs. We therefore devised a measure that calculates the distance between the component words of an MWE in a parse tree, and we use it for flexible-expression search. We conducted experiments.
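The parse-tree distance mentioned in the abstract, as opposed to surface word distance, can be sketched as the number of edges on the path joining two words in the tree. The parent-pointer representation and the example parse of "a lot of work" below are invented for illustration.

```python
# Sketch of a parse-tree distance between MWE component words:
# the number of edges on the path joining the two words.
# The example tree is invented for illustration.

def tree_distance(parent, a, b):
    """Edges between nodes a and b, given a parent-pointer dict."""
    def ancestors(n):
        path = [n]
        while n in parent:
            n = parent[n]
            path.append(n)
        return path
    pa, pb = ancestors(a), ancestors(b)
    pb_set = set(pb)
    for depth, node in enumerate(pa):
        if node in pb_set:              # lowest common ancestor found
            return depth + pb.index(node)
    raise ValueError("nodes are in different trees")

# Hypothetical parse of "a lot of work": NP -> (a, lot, PP), PP -> (of, work)
PARENT = {"a": "NP", "lot": "NP", "PP": "NP", "of": "PP", "work": "PP"}
print(tree_distance(PARENT, "lot", "work"))  # lot -> NP -> PP -> work = 3 edges
```

Inserted clauses lengthen the surface distance between "lot" and "work" but leave this tree distance small, which is why the measure tolerates flexible expressions.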
Keywords
Automated Testing, Illustrative Sentences, Component Words, Multiword Expressions, Contextual Environment.

- Multiple Uses of Query Optimization Technique
Authors
1 Deptt. of Engineering, Dr. C. V. Raman University, Bilaspur (C.G), IN
2 Deptt. of Basic Sciences, Dr. C. V. Raman University, Bilaspur (C.G), IN
3 Dr. C. V. Raman University, Bilaspur (C.G), IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 5, No 5 (2013), Pagination: 233-237
Abstract
Query optimization is the process of selecting the most efficient query-evaluation plan from the many strategies usually possible for processing a given query, especially when the query is complex: the system attempts to find an expression that is equivalent to the given one but more efficient to execute. Human queries are rarely crisp, which poses challenges for efficient answer formation and data retrieval. Much current work is shifting toward hybrid methods that combine new empirical corpus-based methods, including probabilistic and information-theoretic techniques, with traditional symbolic methods. Natural language query processing is needed so that an English sentence can be interpreted by the computer and the appropriate action taken. Querying a database in natural language is a very convenient and easy method of data access, especially for casual users who do not understand complicated database query languages such as SQL, so this paper proposes an architecture for translating English queries into SQL using a semantic grammar.
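A semantic grammar maps sentence patterns directly onto query templates. The single-rule sketch below is only a minimal illustration of that idea; the rule, table, and column names are invented, the paper's actual grammar is not reproduced, and real translation would need escaping rather than raw string interpolation.

```python
# Minimal sketch of pattern-based English-to-SQL translation.
# One invented grammar rule; a real semantic grammar covers many
# sentence patterns. String interpolation is for illustration only
# (no escaping or parameter binding).
import re

RULE = re.compile(
    r"show (?:me )?(?:all )?(\w+) (?:whose|with) (\w+) (?:is )?(\w+)",
    re.IGNORECASE)

def english_to_sql(sentence):
    m = RULE.match(sentence.strip().rstrip("?."))
    if not m:
        return None  # sentence not covered by this rule
    table, column, value = m.groups()
    return f"SELECT * FROM {table} WHERE {column} = '{value}'"

print(english_to_sql("Show me all students whose city is Bilaspur"))
# SELECT * FROM students WHERE city = 'Bilaspur'
```

In the full architecture, the generated SQL would then be handed to the optimizer described at the start of the abstract.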
Keywords
Semantic, Speech Tagging, Parser, Automatically, Hybrid Methods.

- Knowledge of Word Alignment Position in English-Hindi Sentences
Authors
1 Sun Engineering College, Bhilai, IN
2 Dr. CV Raman University, Bhilai, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 4, No 9 (2012), Pagination: 520-522
Abstract
This paper describes a methodology for representing knowledge of word-alignment positions in parallel English-Hindi sentences. The methodology is the basis for developing a parallel English-Hindi word dictionary after syntactic and semantic analysis of the English-Hindi source text. The proposed system is used for aligning English and Hindi sentences; the methodology can also be applied to other languages. A large parallel corpus for the English-Hindi language pair is not readily available, and development is based on two strategies to solve this problem. The first is normalization of tagged English and Hindi sentences; the second is mapping English-Hindi sentences using a parallel English-Hindi word dictionary. Hence the proposed system is well suited to building English-Hindi parallel sentences.
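A keyword of this entry is the mapping score, which can be sketched as the fraction of English content words whose dictionary translation occurs in a candidate Hindi sentence; the candidate with the highest score is taken as the parallel sentence. The lexicon (with transliterated Hindi) and the sentences below are invented for illustration.

```python
# Sketch of a sentence-level mapping score for English-Hindi pairing.
# Lexicon entries (Hindi transliterated) are invented for demonstration.

LEXICON = {"boy": "ladka", "book": "kitab", "reads": "padhta", "girl": "ladki"}

def mapping_score(english_words, hindi_words):
    """Fraction of English words whose translation appears on the Hindi side."""
    hindi = set(hindi_words)
    hits = sum(1 for w in english_words if LEXICON.get(w) in hindi)
    return hits / len(english_words)

# The candidate Hindi sentence maximizing the score is taken as parallel.
en = ["boy", "reads", "book"]
candidates = [["ladki", "kitab"], ["ladka", "kitab", "padhta"]]
best = max(candidates, key=lambda hi: mapping_score(en, hi))
print(best)  # ['ladka', 'kitab', 'padhta']
```

Normalization of both sides before scoring, as the abstract describes, keeps the word counts comparable so the score is not distorted by function words.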
Keywords
Multi Word Expressions, Mapping Score, Tagging, Local Word Grouping, Word Mapping, Normalization, Part of Speech Tagging (POST), Word Dictionary.

- Word Alignment to Encourage Outsized English-Hindi Parallel Corpus
Authors
1 Sun Engineering College, Bhilai, IN
2 Dr. CV Raman University, Bhilai, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 4, No 9 (2012), Pagination: 538-540
Abstract
The proposed work describes a methodology for understanding parallel English-Hindi sentences using word alignment. It is part of natural language processing (NLP), in which natural language is processed to increase its understandability; NLP is in turn a part of artificial intelligence (AI) that aims to model natural human intelligence. Various previous works ignore word identities and consider only sentence lengths, which does not allow exact identification of words, so the proposed system is useful for aligning a large parallel corpus by aligning the words within it. The methodology used is the foundation for developing a parallel English-Hindi word dictionary after syntactic and semantic analysis of the English-Hindi source text. The method is applied to English and Hindi sentences; moreover, it can be used for other languages. A large parallel corpus for the English-Hindi language pair is not readily available, and progress is based on two strategies to solve this problem. The first is normalization of tagged English and Hindi sentences; the second is mapping English-Hindi sentences using a parallel English-Hindi word dictionary. Fortunately, word alignment is well understood, and a few alignment algorithms are freely available.
Keywords
Tagging, Local Word Grouping, Word Mapping, Normalization, Part of Speech Tagging (POST), Word Dictionary, Multi Word Expressions, Mapping Score.

- Alignment of English-Hindi Sentences
Authors
1 Sun Engineering College, Bilaspur, IN
2 Dr. CV Raman University, Bilaspur, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 4, No 9 (2012), Pagination: 541-543
Abstract
In this paper, the methodology is based on the exploitation of a parallel English-Hindi word dictionary after syntactic and semantic analysis of the English-Hindi source text. We use this methodology for English and Hindi sentences, but it can also be used for other languages. A large parallel corpus for the English-Hindi language pair is not usually available; therefore the proposed system uses two strategies to overcome this problem. The first strategy is normalization of tagged English and Hindi sentences: the normalization process produces an equal number of words in the English and Hindi sentences so that an exact alignment of each word can be found. The second strategy is mapping English-Hindi sentences using a parallel English-Hindi word dictionary. The parallel English-Hindi dictionary contains normalized English-Hindi words, which are easier to align than in previous alignment approaches. Fortunately, word alignment is a well-known task, and some alignment algorithms are freely available, which provides a strong background for this research. Hence the proposed system is very successful at understanding the meaning of expressions generated by human beings in natural language.
Keywords
Normalization, Tagging, Local Word Grouping, Word Mapping, Part of Speech, Word Dictionary, Multi Word Expressions.

- Effective Fuzzy Clustering Technique in Intelligent Decision Support System
Authors
1 Department of Computer Science, Govt. E. Raghavendra Rao P.G. Science College, Bilaspur, IN
2 Department of Computer Science, Dr. C.V. Raman University College, Bilaspur, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 4, No 9 (2012), Pagination: 544-547
Abstract
This paper is based on fuzzy-logic techniques and clustering algorithms and their comparative study, partitioning a large dataset into clusters according to the nature of the data using different fuzzy clustering techniques (FCM and subtractive clustering). The results are presented in a comparative manner on the basis of different factors such as cluster centers and degrees of membership. Data clustering is the process of dividing data elements into classes or clusters so that items in the same class are as similar as possible and items in different classes are as dissimilar as possible. In fuzzy clustering, data points can belong to more than one cluster, and associated with each point are membership grades that indicate the degree to which the data point belongs to the different clusters. Fuzzy clustering techniques are used in this research work because they are among the most widely used methods for developing intelligent systems. The model confirms that the subtractive clustering algorithm is better than the FCM (fuzzy c-means) algorithm, using the MATLAB (Matrix Laboratory) software environment.
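The membership grades and cluster centers compared in the abstract come from the fuzzy c-means iteration. A minimal one-dimensional sketch, assuming the standard FCM update rules with fuzzifier m = 2, is shown below; the data, cluster count, and fixed iteration budget are illustrative, and the paper's own experiments use MATLAB rather than this Python rendering.

```python
# Minimal 1-D fuzzy c-means sketch (fuzzifier m = 2). Data and number
# of clusters are invented; a fuller implementation would also test
# for convergence instead of running a fixed number of iterations.

def fcm(data, centers, iterations=20, m=2.0):
    u = []
    for _ in range(iterations):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - c) + 1e-9 for c in centers]  # avoid division by zero
            u.append([1.0 / sum((d[i] / d[k]) ** (2 / (m - 1))
                                for k in range(len(centers)))
                      for i in range(len(centers))])
        # Center update: mean of the data weighted by u_ij^m
        centers = [sum((u[j][i] ** m) * data[j] for j in range(len(data))) /
                   sum(u[j][i] ** m for j in range(len(data)))
                   for i in range(len(centers))]
    return centers, u

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
centers, u = fcm(data, centers=[0.0, 10.0])
print([round(c, 1) for c in centers])  # centers settle near the two groups
print([round(g, 2) for g in u[0]])     # memberships of a point sum to 1
```

Subtractive clustering, by contrast, estimates the number of centers from data density rather than requiring it up front, which is one reason the comparison in the paper favors it.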
Keywords
Data Set, Fuzzy Clustering, Fuzzy C-Means Clustering, Membership Function, Subtractive Clustering.

- Developing a Face Recognition System Using Principal Component Analysis and Radial Basis Function Network
Authors
1 Department of Computer Science & Engineering, MAT'S University, Raipur, IN
2 Department of Information Technology, Shankaracharya College of Engineering & Technology, Bhilai, IN
3 Department of Physics, MAT'S University, Raipur, IN
4 Department of Computer Science, Shankaracharya College of Engg. & Technology, Bhilai, IN